
    Finite Element Modeling of Transverse Post-Tensioned Joints in Accelerated Bridge Construction (ABC) Full-Scale Precast Bridge Deck Panels

    Accelerated bridge construction (ABC) techniques are gaining popularity among departments of transportation (DOTs) because they reduce on-site construction time and traffic delays. One ABC technique that uses precast deck panels has demonstrated advantages over conventional cast-in-place construction, but it has also shown serviceability issues such as cracking and water leakage at the transverse joints. Some of these problems are addressed by applying longitudinal prestressing. This thesis uses finite element models to evaluate the service and ultimate capacities, in both flexure and shear, of the post-tensioned system currently used by the Utah Department of Transportation (UDOT) and of a proposed curved-bolt system, and to confirm the experimental results. The panels were built and tested under negative moment in order to investigate a known problem, namely tension in the deck concrete. Shear tests were performed on specimens whose geometry was designed to investigate the effects of high shear across the joint. The curved-bolt connection not only provides the necessary compressive stress across the transverse joint but also makes future replacement of a single deck panel possible without replacing the entire deck. Load-deflection and shear-deflection curves were obtained from the experimental tests and compared with the values obtained from the finite element analysis. In flexure, the ultimate load predicted by the finite element model was lower than the experimental ultimate load by 1% for the post-tensioned connection and by 3% for the curved-bolt connection. The shear models predicted the ultimate shear within 5% of the experimental values, and the cracking patterns also matched closely. The yield and cracking moments of the curved-bolt connection predicted by the finite element model were lower by 13% and 2%, respectively, than those of the post-tensioned connection in flexure.
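
    The comparisons above are all expressed as percent differences between finite element predictions and measured capacities. A minimal sketch of that bookkeeping is below; the load values are placeholders, not numbers from the thesis.

```python
# Percent-difference comparison of FE predictions vs. experiment, the
# metric used throughout this abstract. All load values are placeholders.

def percent_diff(predicted: float, measured: float) -> float:
    """Signed percent difference of an FE prediction from the experiment."""
    return 100.0 * (predicted - measured) / measured

# Hypothetical ultimate loads (kN); the real values are in the thesis.
cases = {
    "post-tensioned, flexure": (495.0, 500.0),  # (FE, experiment)
    "curved-bolt, flexure": (485.0, 500.0),
    "post-tensioned, shear": (760.0, 800.0),
}

for name, (fe, exp) in cases.items():
    print(f"{name}: {percent_diff(fe, exp):+.1f}% vs. experiment")
```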

    Neuromorphic Architecture Optimization for Task-Specific Dynamic Learning

    The ability to learn and adapt in real time is a central feature of biological systems. Neuromorphic architectures that demonstrate such versatility can greatly enhance our ability to process information efficiently at the edge. A key challenge, however, is to understand which learning rules are best suited for specific tasks and how the relevant hyperparameters can be fine-tuned. In this work, we introduce a conceptual framework in which the learning process is integrated into the network itself. This allows us to cast meta-learning as a mathematical optimization problem. We employ DeepHyper, a scalable asynchronous model-based search, to simultaneously optimize the choice of meta-learning rules and their hyperparameters. We demonstrate our approach on two datasets, MNIST and FashionMNIST, using a network architecture inspired by the learning center of the insect brain. Our results show that optimal learning rules can be dataset-dependent even within similar tasks. This dependency demonstrates the importance of introducing versatility and flexibility into learning algorithms. It also illuminates experimental findings in insect neuroscience that have shown a heterogeneity of learning rules within the insect mushroom body.
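
    As a concrete illustration of the search space, the sketch below jointly samples a categorical plasticity-rule choice and its continuous hyperparameters. Plain random search stands in here for DeepHyper's asynchronous model-based search, and the rule names, ranges, and dummy objective are assumptions for illustration only.

```python
import math
import random

def evaluate(config):
    # Dummy objective standing in for training the insect-brain-inspired
    # network on MNIST/FashionMNIST and returning validation accuracy.
    bonus = 0.05 if config["rule"] == "stdp" else 0.0
    return (0.9 + bonus
            - 0.1 * abs(math.log10(config["lr"] / 1e-2))
            - 0.1 * config["decay"])

def sample_config():
    return {
        "rule": random.choice(["hebbian", "oja", "stdp"]),  # categorical rule choice
        "lr": 10 ** random.uniform(-4, -1),                 # log-uniform learning rate
        "decay": random.uniform(0.0, 0.5),                  # synaptic decay
    }

best_score, best_cfg = -float("inf"), None
for _ in range(200):                                        # search budget
    cfg = sample_config()
    score = evaluate(cfg)
    if score > best_score:
        best_score, best_cfg = score, cfg

print(best_score, best_cfg)
```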

    Unified Probabilistic Neural Architecture and Weight Ensembling Improves Model Robustness

    Robust machine learning models with accurately calibrated uncertainties are crucial for safety-critical applications. Probabilistic machine learning, and especially the Bayesian formalism, provides a systematic framework to incorporate robustness through distributional estimates and to reason about uncertainty. Recent works have shown that approximate-inference approaches that use the weight-space uncertainty of neural networks to generate ensemble predictions are the state of the art. However, architecture choices have mostly been ad hoc, which essentially ignores the epistemic uncertainty from the architecture space. To this end, we propose a Unified probabilistic architecture and weight ensembling Neural Architecture Search (UraeNAS) that leverages advances in probabilistic neural architecture search and approximate Bayesian inference to generate ensembles from the joint distribution of neural network architectures and weights. The proposed approach showed a significant improvement on both in-distribution CIFAR-10 (0.86% in accuracy, 42% in ECE) and out-of-distribution CIFAR-10-C (2.43% in accuracy, 30% in ECE) compared to the baseline deterministic approach.
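
    A minimal sketch of the ensembling idea follows: architectures are drawn from a categorical distribution and weights from a per-architecture Gaussian approximate posterior, and the predictive probabilities are averaged. The tiny linear "architectures" and all distribution parameters are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict(arch_id, w, x):
    """Forward pass of a tiny linear 'architecture' (placeholder)."""
    return x @ w[arch_id]

n_arch, d_in, n_class = 3, 8, 10
arch_probs = np.array([0.5, 0.3, 0.2])              # q(architecture)
w_mean = rng.normal(size=(n_arch, d_in, n_class))   # per-architecture posterior means
w_std = 0.1 * np.ones_like(w_mean)                  # per-architecture posterior stds

x = rng.normal(size=(4, d_in))                      # a small input batch
n_samples = 32                                      # ensemble size
probs = np.zeros((4, n_class))
for _ in range(n_samples):
    a = rng.choice(n_arch, p=arch_probs)                # sample an architecture
    w = w_mean + w_std * rng.normal(size=w_mean.shape)  # sample its weights
    probs += softmax(predict(a, w, x))
probs /= n_samples                                  # Bayesian model average
print(probs.argmax(axis=1))
```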

    A Modular Deep Learning Pipeline for Galaxy-Scale Strong Gravitational Lens Detection and Modeling

    Upcoming large astronomical surveys are expected to capture an unprecedented number of strong gravitational lensing systems in the Universe. Deep learning is emerging as a promising practical tool for the detection and quantification of these galaxy-scale image distortions. However, the absence of large quantities of representative data from current astronomical surveys requires the development of robust forward modeling of synthetic lensing images. Using a realistic and unbiased sample of strong lenses created from state-of-the-art extragalactic catalogs, we train a modular deep learning pipeline for uncertainty-quantified detection and modeling, with intermediate image-processing components for denoising and deblending the lensing systems. We demonstrate a higher degree of interpretability and controlled systematics thanks to domain-specific task modules that are trained on different stages of the synthetic image generation. For lens detection and modeling, we obtain semantically meaningful latent spaces that separate classes, and we provide uncertainty estimates that explain the misclassified images and place uncertainty bounds on the lens parameters. In addition, we obtain improved performance: lens detection (classification) improved from 82% with the baseline to 94%, while lens modeling (regression) accuracy improved by 25% over the baseline model.
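
    The sketch below illustrates the modular staging described above, with denoising, deblending, detection, and modeling composed as independently trained components. The stage names, interfaces, and placeholder outputs are assumptions for illustration; they are not the paper's code.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    fn: Callable  # in practice, a trained network's forward pass

def run_pipeline(image, stages):
    state = {"image": image}
    for stage in stages:          # each module reads/writes a shared state
        state = stage.fn(state)
    return state

pipeline = [
    Stage("denoise", lambda d: {**d, "clean": d["image"]}),           # placeholder
    Stage("deblend", lambda d: {**d, "lens_only": d["clean"]}),       # placeholder
    Stage("detect",  lambda d: {**d, "is_lens": True, "p": 0.9}),     # classifier + uncertainty
    Stage("model",   lambda d: {**d, "theta_E": 1.0, "sigma": 0.1}),  # regressor + bounds
]

result = run_pipeline(image=None, stages=pipeline)
print(result["is_lens"], result["theta_E"])
```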

    Towards Continually Learning Application Performance Models

    Machine learning-based performance models are increasingly being used to make critical job-scheduling and application-optimization decisions. Traditionally, these models assume that the data distribution does not change as more samples are collected over time. However, production HPC systems are complex and heterogeneous, and they are susceptible to hardware degradation, replacement, and/or software patches, any of which can cause the data distribution to drift and adversely affect the performance models. To this end, we develop continually learning performance models that account for the distribution drift, alleviate catastrophic forgetting, and improve generalizability. Our best model was able to retain accuracy despite having to learn the new data distribution induced by system changes, while demonstrating a 2x improvement in prediction accuracy over the whole data sequence compared to the naive approach.
    Comment: Presented at the Workshop on Machine Learning for Systems at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
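
    One common way to realize such a continually learning model is replay: mix each new batch with a reservoir sample of past data, so the model tracks the drifted distribution without forgetting the old one. The sketch below is an illustration of that general idea under assumed names and interfaces, not the paper's method.

```python
import random

class ReplayBuffer:
    """Reservoir sampler keeping a uniform sample of the data stream."""
    def __init__(self, capacity=1000):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.seen)   # replace with decreasing probability
            if j < self.capacity:
                self.data[j] = sample

def continual_update(model_fit, new_batch, buffer, replay_ratio=1.0):
    """Refit on new (drifted) data mixed with replayed old data."""
    k = int(replay_ratio * len(new_batch))
    replay = random.sample(buffer.data, min(k, len(buffer.data)))
    model_fit(new_batch + replay)             # one refit / gradient pass
    for s in new_batch:                       # then remember the new data
        buffer.add(s)
```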

    Sparsity-Inducing Categorical Prior Improves Robustness of the Information Bottleneck

    The information bottleneck framework provides a systematic approach to learning representations that compress nuisance information in the input and extract semantically meaningful information about the predictions. However, the choice of a prior distribution that fixes the dimensionality across all the data can restrict the flexibility of this approach for learning robust representations. We present a novel sparsity-inducing spike-and-slab categorical prior that uses sparsity as a mechanism to provide flexibility, allowing each data point to learn its own dimension distribution. In addition, it provides a mechanism for learning a joint distribution of the latent variable and the sparsity, and hence it can account for the complete uncertainty in the latent space. Through a series of experiments using in-distribution and out-of-distribution learning scenarios on the MNIST, CIFAR-10, and ImageNet data, we show that the proposed approach improves accuracy and robustness compared to traditional fixed-dimensional priors, as well as to other sparsity-induction mechanisms for latent-variable models proposed in the literature.
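
    The sketch below shows one simple instantiation of a spike-and-slab latent (an illustration, not the paper's exact parameterization): a per-dimension Bernoulli "spike" gates a continuous "slab", so different inputs can occupy different effective numbers of latent dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(pi, mu, sigma):
    """pi: spike (keep) probabilities per dim; mu, sigma: slab parameters."""
    b = rng.random(pi.shape) < pi                # Bernoulli spikes
    s = mu + sigma * rng.normal(size=mu.shape)   # continuous slab values
    return b * s                                 # gated dims carry no information

# Two inputs encoded with different inferred sparsity levels.
z1 = sample_latent(pi=np.array([0.9, 0.9, 0.1, 0.1]),
                   mu=np.zeros(4), sigma=np.ones(4))
z2 = sample_latent(pi=np.array([0.9, 0.9, 0.9, 0.9]),
                   mu=np.zeros(4), sigma=np.ones(4))
print(z1, int((z1 != 0).sum()), "active dims")
print(z2, int((z2 != 0).sum()), "active dims")
```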